Supermicro Ramps Up Production of NVIDIA Blackwell Rack-Scale Solutions
Latest News
February 10, 2025
Supermicro, Inc. announces full production availability of its end-to-end AI data center Building Block Solutions accelerated by the NVIDIA Blackwell platform. The Supermicro Building Block portfolio provides the core infrastructure elements necessary to scale Blackwell solutions. The portfolio includes a range of air-cooled and liquid-cooled systems with multiple CPU options.
The portfolio's thermal designs support traditional air cooling as well as liquid-to-liquid (L2L) and liquid-to-air (L2A) cooling. In addition, Supermicro reports that a full data center management software suite, rack-level integration (including network switching and cabling), and cluster-level L12 solution validation can be delivered as a turnkey offering with global delivery, professional support, and service.
“In this transformative moment of AI, where scaling laws are pushing the limits of data center capabilities, our latest NVIDIA Blackwell-powered solutions, developed through close collaboration with NVIDIA, deliver outstanding computational power,” says Charles Liang, president and CEO of Supermicro. “Supermicro's NVIDIA Blackwell GPU offerings in plug-and-play scalable units with advanced liquid cooling and air cooling are empowering customers to deploy an infrastructure that supports increasingly complex AI workloads while maintaining exceptional efficiency.”
Supermicro's NVIDIA HGX B200 8-GPU systems use next-generation liquid-cooling and air-cooling technology, the company notes. The new cold plates and 250kW coolant distribution unit (CDU) more than double the cooling capacity of the previous generation in the same 4U form factor. Available in 42U, 48U, or 52U configurations, the rack-scale design uses new vertical coolant distribution manifolds (CDMs) that no longer occupy rack units. This enables eight systems, comprising 64 NVIDIA Blackwell GPUs, in a 42U rack, and up to 12 systems with 96 NVIDIA Blackwell GPUs in a 52U rack.
The new air-cooled 10U NVIDIA HGX B200 system features a redesigned chassis with expanded thermal headroom to accommodate eight 1,000W TDP Blackwell GPUs, according to Supermicro. Up to four of the new 10U air-cooled systems can be installed and fully integrated in a rack.
The new SuperCluster designs incorporate NVIDIA Quantum-2 InfiniBand or NVIDIA Spectrum-X Ethernet networking in a centralized rack, enabling a non-blocking, 256-GPU scalable unit in five racks or an extended 768-GPU scalable unit in nine racks. This architecture, built for NVIDIA HGX B200 systems with native support for the NVIDIA AI Enterprise software platform for developing production-grade, end-to-end agentic AI pipelines, combined with Supermicro's expertise in deploying liquid-cooled data centers, delivers efficiency and reduced time-to-online for AI data center projects.
Liquid-Cooled or Air-Cooled
The new liquid-cooled 4U NVIDIA HGX B200 8-GPU system features newly developed cold plates and an advanced tubing design that improve serviceability over the predecessor used for the NVIDIA HGX H100/H200 8-GPU system. Complemented by a new 250kW coolant distribution unit that more than doubles the cooling capacity of the previous generation while maintaining the same 4U form factor, the rack-scale design with new vertical coolant distribution manifolds (CDMs) enables a denser architecture with flexible configurations for various data center environments.
Supermicro offers 42U, 48U, or 52U rack configurations for liquid-cooled data centers. The 42U and 48U configurations hold eight systems, or 64 GPUs, per rack, forming a 256-GPU scalable unit across five racks. The 52U rack configuration holds 96 GPUs per rack, enabling a 768-GPU scalable unit across nine racks for the most advanced AI data center deployments. Supermicro also offers an in-row CDU option for large deployments, as well as a liquid-to-air cooling rack solution that doesn't require facility water.
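The rack and scalable-unit figures above follow from straightforward multiplication. The short sketch below reproduces that arithmetic from the counts stated in the article; the assumption that each scalable unit's extra rack is the centralized networking rack is ours, inferred from the SuperCluster description:

```python
# Illustrative sketch of the rack math described in the article.
GPUS_PER_SYSTEM = 8  # each NVIDIA HGX B200 system holds 8 GPUs


def rack_gpus(systems_per_rack: int) -> int:
    """GPUs in a single compute rack."""
    return systems_per_rack * GPUS_PER_SYSTEM


def scalable_unit(compute_racks: int, systems_per_rack: int) -> tuple[int, int]:
    """(total GPUs, total racks), assuming one extra centralized networking rack."""
    total_gpus = compute_racks * rack_gpus(systems_per_rack)
    return total_gpus, compute_racks + 1


# 42U/48U racks: 8 systems -> 64 GPUs per rack;
# 4 compute racks + 1 networking rack -> 256 GPUs in 5 racks.
print(rack_gpus(8))          # 64
print(scalable_unit(4, 8))   # (256, 5)

# 52U racks: 12 systems -> 96 GPUs per rack;
# 8 compute racks + 1 networking rack -> 768 GPUs in 9 racks.
print(rack_gpus(12))         # 96
print(scalable_unit(8, 12))  # (768, 9)
```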
Supermicro's NVIDIA HGX B200 systems natively support NVIDIA AI Enterprise software to accelerate the path to production AI. NVIDIA NIM microservices allow organizations to access the latest AI models for fast, secure, and reliable deployment on NVIDIA-accelerated infrastructure anywhere, whether in data centers, the cloud, or on workstations.
For more information, visit www.supermicro.com/AI
Sources: Press materials received from the company and additional information gleaned from the company’s website.
About the Author
DE Editors
DE's editors contribute news and new product announcements to Digital Engineering.
Press releases may be sent to them via DE-Editors@digitaleng.news.